A significance level (α) is the probability that the test statistic (e.g. z or t) will fall in the critical region when the null hypothesis is actually true. Typical values for α are 0.01, 0.05 and 0.10.
Significance level is defined as the probability of making a decision to reject the null hypothesis when the null hypothesis is actually true (a decision known as a Type I error). The decision is often made using the p-value: if the p-value is less than the significance level, then the null hypothesis is rejected. The smaller the p-value, the more significant the result is said to be.[1]
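The decision rule can be illustrated with a short sketch. The following is a minimal example, assuming SciPy is available and using made-up sample data and a hypothetical null mean mu_0; it runs a two-tailed one-sample t-test and compares the resulting p-value with α.

```python
# Minimal sketch of the p-value decision rule (assumes SciPy is installed):
# reject the null hypothesis when the p-value is less than the significance level.
from scipy import stats

alpha = 0.05                                          # significance level chosen by the investigator
sample = [2.3, 1.9, 2.7, 2.1, 2.5, 1.8, 2.4, 2.6]     # hypothetical data
mu_0 = 2.0                                            # hypothesized population mean under H0

# Two-tailed one-sample t-test of H0: population mean equals mu_0
t_stat, p_value = stats.ttest_1samp(sample, popmean=mu_0)

if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis")
```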
The significance level is set by the investigator in relation to the criticality of the experiment: the more critical the result of the experiment, the more important it is to protect the null hypothesis and to choose a smaller significance level.
For example, the significance level used in a hypothesis test for a space mission needs to be smaller than the one used in a test to determine the average income of dentists in a given geography.
A critical value is any value that separates the critical region from the values of the test statistic that would not cause us to reject the null hypothesis; it divides the rejection region from the non-rejection region.
A critical value is the value that a test statistic must be more extreme than (for example, exceed in a right-tailed test) in order for the null hypothesis to be rejected. Critical values are used at the decision step to determine whether the null hypothesis is rejected or not.
The critical value for any hypothesis test depends on the significance level at which the test is carried out, and whether the test is one-tailed or two-tailed.[2]
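As a sketch of that dependence, the example below (again assuming SciPy) computes critical values for a standard normal (z) test statistic at α = 0.05, for both a one-tailed and a two-tailed test.

```python
# Minimal sketch of how critical values depend on the significance level
# and on whether the test is one-tailed or two-tailed, for a z statistic.
from scipy.stats import norm

alpha = 0.05

# Right-tailed test: reject H0 if z > z_crit
z_crit_one_tailed = norm.ppf(1 - alpha)        # approximately 1.645

# Two-tailed test: reject H0 if |z| > z_crit
z_crit_two_tailed = norm.ppf(1 - alpha / 2)    # approximately 1.960

print(f"one-tailed critical value: {z_crit_one_tailed:.3f}")
print(f"two-tailed critical value: {z_crit_two_tailed:.3f}")
```

Note that the two-tailed critical value is larger, because the significance level is split between the two tails of the distribution.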